
    Data citation and reuse practice in biodiversity - Challenges of adopting a standard citation model

    Openly available research data promotes reproducibility in science and results in higher citation rates for articles published with data in the biological and social sciences. Even though biodiversity is one of the fields where data is frequently reused, information about how data is reused and cited is often not openly accessible from research data repositories. This study explores data citation and reuse practices in biodiversity using openly available metadata for 43,802 datasets indexed in the Global Biodiversity Information Facility (GBIF). Quantitative analysis of dataset types and citation counts suggests that the number of studies making use of openly available biodiversity data has been increasing steadily. Citation rates vary across dataset types according to data quality, and, as with articles, it takes 2-3 years for datasets to accrue most of their citations. Content analysis of a random sample of unique citing articles (n=101) for the 437 cited datasets in a random sample of 1,000 datasets suggests that best practice for data citation is yet to be established: 26.7% of articles mentioned the dataset in the references, 12.9% mentioned it in a data access statement in addition to the methods section, and only 2% mentioned it in all three places, which matters for the automatic extraction of citation information. Citation practice was especially inconsistent when a large number of subsets (12-50) were used. This calls for the adoption of a standard citation model in this field to provide proper attribution when subsets of data are used.
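
    The collection step behind this kind of study can be sketched against GBIF's public registry API. A minimal sketch, assuming the https://api.gbif.org/v1/dataset registry endpoint and a literature-search endpoint for per-dataset citation counts; the abstract does not say which endpoints or fields the authors actually used.

        # Minimal sketch: sample dataset metadata from the GBIF registry and
        # tally dataset types; look up how many citing papers GBIF has indexed
        # for a given dataset. Endpoints are assumptions, not the paper's code.
        import collections
        import requests

        API = "https://api.gbif.org/v1"

        def tally_dataset_types(n=100):
            """Page through /dataset and count dataset types
            (e.g. OCCURRENCE, CHECKLIST, SAMPLING_EVENT, METADATA)."""
            types = collections.Counter()
            for offset in range(0, n, 20):
                page = requests.get(f"{API}/dataset",
                                    params={"limit": 20, "offset": offset}).json()
                for d in page["results"]:
                    types[d.get("type", "UNKNOWN")] += 1
            return types

        def citation_count(dataset_key):
            """Citing papers indexed for one dataset (assumed endpoint:
            /literature/search filtered by gbifDatasetKey)."""
            r = requests.get(f"{API}/literature/search",
                             params={"gbifDatasetKey": dataset_key, "limit": 0}).json()
            return r["count"]

        print(tally_dataset_types(40))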

    Quasi-Birth-Death Processes, Tree-Like QBDs, Probabilistic 1-Counter Automata, and Pushdown Systems

    We begin by observing that (discrete-time) Quasi-Birth-Death Processes (QBDs) are equivalent, in a precise sense, to probabilistic 1-Counter Automata (p1CAs), and both Tree-Like QBDs (TL-QBDs) and Tree-Structured QBDs (TS-QBDs) are equivalent to both probabilistic Pushdown Systems (pPDSs) and Recursive Markov Chains (RMCs). We then proceed to exploit these connections to obtain a number of new algorithmic upper and lower bounds for central computational problems about these models. Our main result is this: for an arbitrary QBD, we can approximate its termination probabilities (i.e., its $G$ matrix) to within $i$ bits of precision (i.e., within additive error $1/2^i$), in time polynomial in both the encoding size of the QBD and in $i$, in the unit-cost rational arithmetic RAM model of computation. Specifically, we show that a decomposed Newton's method can be used to achieve this. We emphasize that this bound is very different from the well-known "linear/quadratic convergence" of numerical analysis, known for QBDs and TL-QBDs, which typically gives no constructive bound in terms of the encoding size of the system being solved. In fact, we observe (based on recent results) that for the more general TL-QBDs such a polynomial upper bound on Newton's method fails badly. Our upper bound proof for QBDs combines several ingredients: a detailed analysis of the structure of 1-counter automata, an iterative application of a classic condition number bound for errors in linear systems, and a very recent constructive bound on the performance of Newton's method for strongly connected monotone systems of polynomial equations. We show that the quantitative termination decision problem for QBDs (namely, "is $G_{u,v} \geq 1/2$?") is at least as hard as long-standing open problems in the complexity of exact numerical computation, specifically the square-root sum problem. On the other hand, it follows from our earlier results for RMCs that any non-trivial approximation of termination probabilities for TL-QBDs is square-root-sum-hard.
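
    The role of Newton's method can be seen in the simplest scalar case. A minimal sketch, assuming a probabilistic 1-counter automaton that decrements the counter with probability q and increments it with probability p = 1 - q: its termination probability (the scalar analogue of the $G$ matrix) is the least nonnegative solution of $x = q + px^2$, which Newton's method reaches monotonically from 0.

        # Newton's method for the least fixed point of x = q + p*x^2, the
        # termination probability of a toy probabilistic 1-counter automaton
        # (scalar analogue of the QBD G matrix). Illustrative only; the paper
        # treats the general matrix case with a decomposed Newton's method.

        def termination_probability(p, tol=2.0 ** -53, max_iter=200):
            """Least nonnegative solution of x = q + p*x^2, where q = 1 - p."""
            q = 1.0 - p
            x = 0.0  # starting below the least fixed point gives monotone iterates
            for _ in range(max_iter):
                f = q + p * x * x - x        # F(x) = f(x) - x
                fprime = 2.0 * p * x - 1.0   # F'(x); strictly negative on the iterates
                x_new = x - f / fprime
                if abs(x_new - x) < tol:
                    return x_new
                x = x_new
            return x

        for p in (0.3, 0.5, 0.7):
            q = 1.0 - p
            print(p, termination_probability(p), min(1.0, q / p))  # closed form: min(1, q/p)

    The paper's point is quantitative: for QBDs the number of such iterations can be bounded polynomially in the encoding size and in $i$, which the generic "linear/quadratic convergence" theory does not give.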

    Artificial intelligence to support publishing and peer review: a summary and review

    Technology is being developed to support the peer review processes of journals, conferences, funders, universities, and national research evaluations. This literature and software summary discusses the partial or complete automation of several publishing-related tasks: suggesting appropriate journals for an article, providing quality control for submitted papers, finding suitable reviewers for submitted papers or grant proposals, reviewing, and review evaluation. It also discusses attempts to estimate article quality from peer review text and scores, as well as from post-publication scores, but not from bibliometric data. The literature and existing examples of working technology show that automation is useful for helping to find reviewers, and there is good evidence that it can sometimes help with the initial quality control of submitted manuscripts. Much other software supporting publishing and editorial work exists and is in use, but without published academic evaluations of its efficacy. The value of artificial intelligence (AI) for supporting reviewing itself, however, has not yet been clearly demonstrated. Finally, whilst peer review text and scores can theoretically have value for post-publication research assessment, they are not yet widely enough available to be a practical evidence source for systematic automation.
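
    Of the tasks listed, reviewer finding is often framed as ranking candidates by textual similarity between the submission and each candidate's past papers. A minimal sketch of that baseline using TF-IDF cosine similarity; the reviewer profiles are hypothetical, and this is not any specific system evaluated in the review.

        # Illustrative reviewer-matching baseline: rank candidates by TF-IDF
        # cosine similarity between a submission and each reviewer's past work.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        reviewers = {  # hypothetical candidate profiles (concatenated abstracts)
            "r1": "citation analysis bibliometrics journal impact indicators",
            "r2": "deep learning neural networks image classification",
            "r3": "peer review quality research evaluation metascience",
        }
        submission = "predicting review outcomes from peer review reports and scores"

        matrix = TfidfVectorizer().fit_transform(list(reviewers.values()) + [submission])
        scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
        print(sorted(zip(reviewers, scores), key=lambda t: -t[1]))  # best match first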

    CFA2: a Context-Free Approach to Control-Flow Analysis

    In a functional language, the dominant control-flow mechanism is function call and return. Most higher-order flow analyses, including k-CFA, do not handle call and return well: they remember only a bounded number of pending calls because they approximate programs with control-flow graphs. Call/return mismatch introduces precision-degrading spurious control-flow paths and increases analysis time. We describe CFA2, the first flow analysis with precise call/return matching in the presence of higher-order functions and tail calls. We formulate CFA2 as an abstract interpretation of programs in continuation-passing style and describe a sound and complete summarization algorithm for our abstract semantics. A preliminary evaluation shows that CFA2 gives more accurate data-flow information than 0CFA and 1CFA.
    Comment: LMCS 7 (2:3) 2011
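
    The call/return mismatch can be made concrete on a toy interprocedural graph (hypothetical, not from the paper): plain graph reachability, which is what a bounded control-flow-graph abstraction computes, admits a path that enters a function from one call site but returns to another, while matching returns against pending calls rules that path out.

        # Toy illustration of call/return mismatch. main calls id twice, at
        # sites "a" and "b". Edges are (src, dst, label), where the label is
        # ("call", site), ("ret", site), or None for intraprocedural flow.
        EDGES = [
            ("call_a", "id_entry", ("call", "a")),
            ("id_exit", "after_a", ("ret", "a")),
            ("after_a", "call_b", None),
            ("call_b", "id_entry", ("call", "b")),
            ("id_exit", "after_b", ("ret", "b")),
            ("id_entry", "id_exit", None),
        ]

        def paths(src, dst, matched, banned=frozenset(), limit=10):
            """Enumerate bounded-length edge paths from src to dst. If matched,
            a return edge is taken only when its site matches the top of the
            pending-call stack (a toy form of Dyck/CFL-reachability)."""
            found = []
            def go(node, stack, path):
                if node == dst:
                    found.append(" -> ".join(path))
                    return
                if len(path) >= limit:
                    return
                for s, t, lab in EDGES:
                    if s != node or t in banned:
                        continue
                    new_stack = stack
                    if lab and lab[0] == "call":
                        new_stack = stack + (lab[1],)
                    elif lab and lab[0] == "ret":
                        if matched and (not stack or stack[-1] != lab[1]):
                            continue  # would return to the wrong call site
                        new_stack = stack[:-1]
                    go(t, new_stack, path + [t])
            go(src, (), [src])
            return found

        # Can flow reach after_b from call_a without ever passing call_b?
        print(paths("call_a", "after_b", matched=False, banned={"call_b"}))  # spurious path
        print(paths("call_a", "after_b", matched=True, banned={"call_b"}))   # [] -- ruled out

    The matched search is a toy stand-in for what CFA2's summarization algorithm establishes for higher-order programs.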

    Reviewing, indicating, and counting books for modern research evaluation systems

    In this chapter, we focus on the specialists who have helped to improve the conditions for book assessments in research evaluation exercises, with empirically based data and insights supporting their greater integration. Our review highlights the research carried out by four types of expert communities, referred to as the monitors, the subject classifiers, the indexers, and the indicator constructionists. Many challenges lie ahead for scholars affiliated with these communities, particularly the latter three. By acknowledging their unique yet interrelated roles, we show where the greatest potential is for both quantitative and qualitative indicator advancements in book-inclusive evaluation systems.
    Comment: Forthcoming in Glanzel, W., Moed, H.F., Schmoch, U., & Thelwall, M. (2018), Springer Handbook of Science and Technology Indicators, Springer. Some corrections made in subsection 'Publisher prestige or quality'.

    The end of the beginning: a reflection on the first five years of the HRI conference

    This study presents a historical overview of the International Conference on Human-Robot Interaction (HRI). It summarizes the conference's growth, internationalization, and collaboration patterns, and provides rankings for countries, organizations, and authors. Furthermore, an analysis of military funding for HRI papers is performed: approximately 20% of the papers are funded by the US military. The proportion of papers from the US is around 65%, and the dominant role of the US is challenged only by the strong position of Japan, in particular through the contributions of ATR.

    COVID-19 publications: Database coverage, citations, readers, tweets, news, Facebook walls, Reddit posts

    The COVID-19 pandemic requires a fast response from researchers to help address biological, medical, and public health issues and to minimize its impact. In this rapidly evolving context, scholars, professionals, and the public may need to quickly identify important new studies. In response, this paper assesses the coverage of scholarly databases and impact indicators between 21 March and 18 April 2020. The rapidly increasing volume of research is particularly accessible through Dimensions, and less so through Scopus, the Web of Science, and PubMed. Google Scholar's results included many false matches. A few of the 21,395 COVID-19 papers in Dimensions were already highly cited, with substantial news and social media attention. For this topic, in contrast to previous studies, there seems to be a high degree of convergence between articles shared on the social web and citation counts, at least in the short term. In particular, articles that are extensively tweeted on the day they are first indexed are likely to be highly read and relatively highly cited three weeks later. Researchers needing wide-scope literature searches (rather than health-focused PubMed or medRxiv searches) should start with Dimensions (or Google Scholar) and can use tweet and Mendeley reader counts as indicators of likely importance.
    Published version: https://doi.org/10.1162/qss_a_00066
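
    The tweet/citation convergence claim is the kind of relationship a rank correlation can check. A minimal sketch, assuming a per-article table of early tweet counts and three-week citation counts; the file and column names are hypothetical.

        # Sketch: Spearman rank correlation between tweets on the day an
        # article is first indexed and its citations three weeks later.
        # The CSV file and column names are hypothetical.
        import pandas as pd
        from scipy.stats import spearmanr

        df = pd.read_csv("covid_articles.csv")  # one row per article
        rho, pval = spearmanr(df["tweets_day_indexed"], df["citations_3_weeks_later"])
        print(f"Spearman rho={rho:.2f} (p={pval:.3g}, n={len(df)})")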

    Are citations from clinical trials evidence of higher impact research? An analysis of ClinicalTrials.gov

    An important way in which medical research can translate into improved health outcomes is by motivating or influencing clinical trials that eventually lead to changes in clinical practice. Citations from clinical trial records to academic research may therefore serve as an early warning of the likely future influence of the cited articles. This paper partially assesses this hypothesis by testing whether prior articles referenced in ClinicalTrials.gov records are more highly cited than average for the publishing journal. The results from four high-profile general medical journals support the hypothesis, although there may not be a cause-and-effect relationship. Nevertheless, it is reasonable for researchers to use citations to their work from clinical trial records as partial evidence of the possible long-term impact of their research.
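
    The hypothesis amounts to comparing, within each journal, the citation counts of articles referenced from ClinicalTrials.gov records against the rest. A minimal sketch using a one-sided Mann-Whitney test as the comparison; the abstract does not state the paper's exact statistic, and the data layout here is hypothetical.

        # Sketch: within each journal, are trial-referenced articles more
        # highly cited than the others? Data layout is hypothetical.
        import pandas as pd
        from scipy.stats import mannwhitneyu

        df = pd.read_csv("journal_articles.csv")  # columns: journal, citations, in_trial_refs
        for journal, g in df.groupby("journal"):
            cited = g.loc[g["in_trial_refs"], "citations"]
            rest = g.loc[~g["in_trial_refs"], "citations"]
            u, pval = mannwhitneyu(cited, rest, alternative="greater")
            print(journal, cited.median(), rest.median(), f"p={pval:.3g}")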

    Do altmetric scores reflect article quality? Evidence from the UK Research Excellence Framework 2021

    Altmetrics are web-based quantitative impact or attention indicators for academic articles that have been proposed to supplement citation counts. This article reports the first assessment of the extent to which mature altmetrics from Altmetric.com and Mendeley associate with individual article quality scores. It exploits expert, norm-referenced peer review scores from the UK Research Excellence Framework 2021 for 67,030+ journal articles in all fields, 2014-2017/2018, split into 34 broadly field-based Units of Assessment (UoAs). Altmetrics correlated more strongly with research quality than previously found, although less strongly than raw and field-normalized Scopus citation counts. Surprisingly, field-normalizing citation counts can reduce their strength as a quality indicator for articles in a single field. For most UoAs, Mendeley reader counts are the best altmetric (e.g., three Spearman correlations with quality scores above 0.5); tweet counts are also a moderate-strength indicator in eight UoAs (Spearman correlations with quality scores above 0.3), ahead of news citations (eight correlations above 0.3, but generally weaker), blog citations (five correlations above 0.3), and Facebook citations (three correlations above 0.3), at least in the United Kingdom. In general, altmetrics are the strongest indicators of research quality in the health and physical sciences and weakest in the arts and humanities.
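
    The quoted figures come from computing, within each Unit of Assessment, a rank correlation between each indicator and the peer-review quality score. A minimal sketch; the file and column names are hypothetical.

        # Sketch: per-UoA Spearman correlations of each indicator with the
        # peer-review quality score. File and column names are hypothetical.
        import pandas as pd
        from scipy.stats import spearmanr

        df = pd.read_csv("ref2021_articles.csv")
        indicators = ["mendeley_readers", "tweets", "news", "blogs", "facebook"]
        for uoa, g in df.groupby("uoa"):
            rhos = {ind: spearmanr(g[ind], g["quality_score"])[0] for ind in indicators}
            print(uoa, rhos)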